Three CentOS 7 (1611 release) virtual machines, minimal install. Kernel: Linux localhost 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
docker version 1.13.1
etcd Version: 3.1.13
kubeadm, kubelet, kubectl, and kubernetes-cni versions:
kubelet-1.10.0-0.x86_64.rpm
kubeadm-1.10.0-0.x86_64.rpm
kubectl-1.10.0-0.x86_64.rpm
kubernetes-cni-0.6.0-0.x86_64.rpm
K8s network add-on: flannel v0.10.0-amd64
The lab network plan is as follows:
host1 172.18.0.154/22
host2 172.18.0.155/22
host3 172.18.0.156/22
VIP   172.18.0.192/22
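It helps to make the planned addresses resolvable by name on every node. A minimal sketch (the VIP hostname k8s-vip is an assumption, not from the original post):

```shell
# Map the planned addresses to hostnames on every node.
cat <<'EOF' >> /etc/hosts
172.18.0.154 host1
172.18.0.155 host2
172.18.0.156 host3
172.18.0.192 k8s-vip
EOF
```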
Run the base environment setup script base-env-config.sh on all three hosts.
Run host1-base-env.sh on host1.
Run host2-base-env.sh on host2.
Run host3-base-env.sh on host3.
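The contents of these scripts are not shown in the post; the following is only a sketch of the usual kubeadm 1.10 prerequisites that base-env-config.sh presumably covers (every step here is an assumption):

```shell
#!/bin/sh
# Sketch of assumed base-env-config.sh contents -- the original script
# is not shown in the post.

# kubelet 1.10 refuses to start while swap is enabled.
swapoff -a 2>/dev/null || true
[ -f /etc/fstab ] && sed -i '/ swap / s/^/#/' /etc/fstab

# Let iptables see bridged traffic (needed by kube-proxy and flannel).
mkdir -p /etc/sysctl.d
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system >/dev/null 2>&1 || true  # may be a no-op in a container

# Permissive SELinux and no firewalld, as is typical for kubeadm labs.
setenforce 0 2>/dev/null || true
systemctl disable firewalld 2>/dev/null || true
systemctl stop firewalld 2>/dev/null || true
```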
Run the following command on host1:
[root@host1 ~]# ll /etc/etcd/ssl/
total 12
-rw-r--r--. 1 root root 1387 Mar 30 16:04 ca.pem
-rw-------. 1 root root 1675 Mar 30 16:04 etcd-key.pem
-rw-r--r--. 1 root root 1452 Mar 30 16:04 etcd.pem
[root@host1 ~]# scp -r /etc/etcd/ssl root@172.18.0.155:/etc/etcd/
[root@host1 ~]# scp -r /etc/etcd/ssl root@172.18.0.156:/etc/etcd/
Run the etcd.sh script on each of the three hosts.
Check the keepalived status; heartbeats between the three nodes should be normal:
systemctl status keepalived
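Besides systemctl status, you can check which node currently owns the VIP. A quick sketch (the role labels are just local variables for illustration):

```shell
# keepalived puts the VIP (172.18.0.192) on exactly one node at a time;
# the node holding it is the current MASTER, the others are BACKUP.
if ip addr 2>/dev/null | grep -q '172\.18\.0\.192'; then
  ROLE=MASTER
else
  ROLE=BACKUP
fi
echo "this node is currently: $ROLE"
```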
Check that etcd is running. On host1, host2, and host3, run the following (NODE_IP is the local node's IP):
etcdctl --endpoints=https://${NODE_IP}:2379 --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
Install kubeadm, kubelet, kubectl, and docker on all three hosts:
yum install kubelet kubeadm kubectl kubernetes-cni docker -y
Disable SELinux in the Docker startup options on all three hosts:
sed -i 's/--selinux-enabled/--selinux-enabled=false/g' /etc/sysconfig/docker
Add the following parameter to the kubelet drop-in file on all three hosts:
sed -i '9a\Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/osoulmate/pause-amd64:3.0"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
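After the sed edit, the drop-in should contain a line like the following; the Aliyun-hosted pause image substitutes for the default gcr.io pause image, which is typically unreachable from mainland China:

```
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/osoulmate/pause-amd64:3.0"
```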
Add a Docker registry mirror on all three hosts (optional):
cat <<EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://wcmntott.mirror.aliyuncs.com"]
}
EOF
Run the following commands on each of the three hosts:
systemctl daemon-reload
systemctl enable docker && systemctl restart docker
systemctl enable kubelet && systemctl restart kubelet
Run kubeadmconfig.sh on each of the three hosts to generate the configuration file config.yaml.
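kubeadmconfig.sh itself is not shown in the post; for kubeadm 1.10 the generated config.yaml presumably looks something like the sketch below. The pod subnet and the SAN list are assumptions based on the flannel default and the network plan:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.0
api:
  advertiseAddress: 172.18.0.192     # the keepalived VIP
etcd:
  endpoints:
  - https://172.18.0.154:2379
  - https://172.18.0.155:2379
  - https://172.18.0.156:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
apiServerCertSANs:
- 172.18.0.154
- 172.18.0.155
- 172.18.0.156
- 172.18.0.192
networking:
  podSubnet: 10.244.0.0/16           # flannel's default subnet
```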
On host1, run the kubeadm initialization first:
kubeadm init --config config.yaml
After initialization completes, set up kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Copy the certificates generated by kubeadm on host1 to the corresponding directories on host2 and host3:
scp -r /etc/kubernetes/pki root@172.18.0.155:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@172.18.0.156:/etc/kubernetes/
Install the pod network add-on on host1 (flannel is used here):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Pre-pull the flannel image from an Aliyun mirror (the default quay.io pull is often unreachable):
systemctl stop kubelet
systemctl restart docker
docker pull registry.cn-hangzhou.aliyuncs.com/osoulmate/flannel:v0.10.0-amd64
systemctl start kubelet
Run the kubeadm initialization on host2 and host3:
kubeadm init --config config.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
systemctl stop kubelet
systemctl restart docker
docker pull registry.cn-hangzhou.aliyuncs.com/osoulmate/flannel:v0.10.0-amd64
systemctl start kubelet
Check the status of all cluster nodes:
[root@localhost ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
host1     Ready     master    5m        v1.10.0
host2     Ready     master    1m        v1.10.0
host3     Ready     master    1m        v1.10.0
[root@localhost ~]# kubectl get po --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-k9dcx          1/1       Running   0          5m
kube-system   coredns-7997f8864c-sv9rv          1/1       Running   0          5m
kube-system   kube-apiserver-host1              1/1       Running   1          4m
kube-system   kube-apiserver-host2              1/1       Running   0          1m
kube-system   kube-apiserver-host3              1/1       Running   0          1m
kube-system   kube-controller-manager-host1     1/1       Running   1          4m
kube-system   kube-controller-manager-host2     1/1       Running   0          1m
kube-system   kube-controller-manager-host3     1/1       Running   0          1m
kube-system   kube-flannel-ds-88tz5             1/1       Running   0          1m
kube-system   kube-flannel-ds-g9dpj             1/1       Running   0          2m
kube-system   kube-flannel-ds-h58tp             1/1       Running   0          1m
kube-system   kube-proxy-6fsvq                  1/1       Running   1          5m
kube-system   kube-proxy-g8xnb                  1/1       Running   1          1m
kube-system   kube-proxy-gmqv9                  1/1       Running   1          1m
kube-system   kube-scheduler-host1              1/1       Running   1          5m
kube-system   kube-scheduler-host2              1/1       Running   1          1m
kube-system   kube-scheduler-host3              1/1       Running   0          1m
To verify high availability, shut down host1 and run the following on host3:
while true; do sleep 1; kubectl get node; date; done
On host2, check whether keepalived has switched to the MASTER state.